

Inverse of a Matrix and Systems of Equations



Inverse of a Matrix (Calculation and Properties)

As previously introduced, a square matrix $A$ of order $n \times n$ is called invertible or non-singular if there exists another square matrix $B$ of the same order $n \times n$ such that their products $AB$ and $BA$ both result in the identity matrix of order $n$, denoted $I_n$. That is, $AB = BA = I_n$. The matrix $B$ is then called the inverse of $A$ and is uniquely denoted by $A^{-1}$. If such a matrix $B$ does not exist, the matrix $A$ is called singular or non-invertible.

Finding the inverse of a matrix is a crucial task in linear algebra with applications in solving systems of equations, performing transformations, and other matrix manipulations.


Methods to Calculate the Inverse of a Matrix

There are several methods to compute the inverse of a square matrix $A$, provided that the inverse exists (i.e., $A$ is non-singular). Two of the most commonly used methods are:

  1. Using the Adjoint of the matrix.
  2. Using Elementary Row Operations (also known as the Gauss-Jordan elimination method).

Method 1: Using the Adjoint

This method relies on the relationship between a matrix, its adjoint, and its determinant, which we established in the previous section. For any square matrix $A$ of order $n$, the following identity holds:

$$ A (\text{adj } A) = (\text{adj } A) A = \det(A) I_n \quad \text{... (i)} $$

where $\text{adj}(A)$ is the adjoint of $A$ and $I_n$ is the identity matrix of order $n$.

If the determinant of matrix $A$ is non-zero ($\det(A) \neq 0$), then the matrix is invertible. In this case, we can divide the entire identity (i) by the scalar value $\det(A)$:

$$ \frac{1}{\det(A)} [A (\text{adj } A)] = \frac{1}{\det(A)} [(\text{adj } A) A] = \frac{1}{\det(A)} [\det(A) I_n] $$

Using the properties of scalar multiplication with matrices, the left and middle terms can be rewritten as:

$$ A \left( \frac{1}{\det(A)} \text{adj } A \right) = \left( \frac{1}{\det(A)} \text{adj } A \right) A $$

The right term simplifies to:

$$ \frac{\det(A)}{\det(A)} I_n = 1 \cdot I_n = I_n $$

So, we have the equation:

$$ A \left( \frac{1}{\det(A)} \text{adj } A \right) = \left( \frac{1}{\det(A)} \text{adj } A \right) A = I_n \quad \text{... (ii)} $$

Comparing equation (ii) with the definition of the inverse matrix ($A B = B A = I_n$ where $B = A^{-1}$), we can identify the matrix $B$ that serves as the inverse of $A$. This matrix is $\frac{1}{\det(A)} \text{adj } A$.

Thus, the formula for the inverse of an invertible matrix $A$ is:

$$ A^{-1} = \frac{1}{\det(A)} \text{adj}(A) \quad \text{... (iii)} $$

This formula clearly shows that the inverse of matrix $A$ exists if and only if its determinant $\det(A)$ is non-zero. If $\det(A) = 0$, the matrix $A$ is singular, and division by zero is not defined, indicating that the inverse does not exist.

Steps to find the inverse using the Adjoint method:

  1. Calculate the determinant of the matrix $A$. If $\det(A) = 0$, the matrix is singular, and the inverse does not exist. Stop the process.
  2. If $\det(A) \neq 0$, calculate the cofactor $C_{ij}$ for every element $a_{ij}$ of the matrix $A$.
  3. Form the matrix of cofactors, $\text{cof}(A) = [C_{ij}]$.
  4. Find the adjoint of $A$ by taking the transpose of the matrix of cofactors: $\text{adj}(A) = (\text{cof}(A))^T$.
  5. Multiply the adjoint matrix by the scalar $\frac{1}{\det(A)}$ to get the inverse: $A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$.
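
As a concrete illustration of the steps above, here is a minimal Python sketch of the adjoint method. It assumes NumPy is available; the helper names `minor`, `cofactor_matrix`, and `inverse_via_adjoint` are ours, not from any library.

```python
import numpy as np

def minor(A, i, j):
    """Delete row i and column j of A and return the resulting submatrix."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def cofactor_matrix(A):
    """Matrix of cofactors C_ij = (-1)^(i+j) * det(M_ij)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor(A, i, j))
    return C

def inverse_via_adjoint(A):
    """Step 1: check det; Steps 2-5: cofactors -> adjoint -> scale by 1/det."""
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("Matrix is singular; inverse does not exist.")
    adj = cofactor_matrix(A).T      # adj(A) = (cof A)^T
    return adj / d                  # A^{-1} = adj(A) / det(A)

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 1.0]])
print(inverse_via_adjoint(A))       # expected: [[1, -2, 5], [0, 1, -4], [0, 0, 1]]
```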

Example of Finding Inverse using Adjoint (for 3x3 Matrix)

Example 1. Find the inverse of the matrix $A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{bmatrix}$ using the adjoint method.

Answer:

The given matrix is $A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{bmatrix}$. It is a square matrix of order $3 \times 3$.

1. Calculate the determinant of $A$. Since $A$ is an upper triangular matrix, its determinant is the product of the elements on its main diagonal:

$$ \det(A) = \begin{vmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{vmatrix} = 1 \times 1 \times 1 = 1 $$

Since $\det(A) = 1 \neq 0$, the matrix $A$ is non-singular, and its inverse exists.

2. Find the matrix of cofactors $C_{ij} = (-1)^{i+j} M_{ij}$.

  • $C_{11} = (-1)^{1+1} \begin{vmatrix} 1 & 4 \\ 0 & 1 \end{vmatrix} = (+1)(1 \times 1 - 0 \times 4) = 1 - 0 = 1$.
  • $C_{12} = (-1)^{1+2} \begin{vmatrix} 0 & 4 \\ 0 & 1 \end{vmatrix} = (-1)(0 \times 1 - 0 \times 4) = (-1)(0) = 0$.
  • $C_{13} = (-1)^{1+3} \begin{vmatrix} 0 & 1 \\ 0 & 0 \end{vmatrix} = (+1)(0 \times 0 - 0 \times 1) = 0 - 0 = 0$.
  • $C_{21} = (-1)^{2+1} \begin{vmatrix} 2 & 3 \\ 0 & 1 \end{vmatrix} = (-1)(2 \times 1 - 0 \times 3) = (-1)(2) = -2$.
  • $C_{22} = (-1)^{2+2} \begin{vmatrix} 1 & 3 \\ 0 & 1 \end{vmatrix} = (+1)(1 \times 1 - 0 \times 3) = 1 - 0 = 1$.
  • $C_{23} = (-1)^{2+3} \begin{vmatrix} 1 & 2 \\ 0 & 0 \end{vmatrix} = (-1)(1 \times 0 - 0 \times 2) = (-1)(0) = 0$.
  • $C_{31} = (-1)^{3+1} \begin{vmatrix} 2 & 3 \\ 1 & 4 \end{vmatrix} = (+1)(2 \times 4 - 1 \times 3) = 8 - 3 = 5$.
  • $C_{32} = (-1)^{3+2} \begin{vmatrix} 1 & 3 \\ 0 & 4 \end{vmatrix} = (-1)(1 \times 4 - 0 \times 3) = (-1)(4) = -4$.
  • $C_{33} = (-1)^{3+3} \begin{vmatrix} 1 & 2 \\ 0 & 1 \end{vmatrix} = (+1)(1 \times 1 - 0 \times 2) = 1 - 0 = 1$.

The matrix of cofactors is:

$$ \text{cof}(A) = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 5 & -4 & 1 \end{bmatrix} $$

3. Find the adjoint of $A$ by taking the transpose of the cofactor matrix:

$$ \text{adj}(A) = (\text{cof}(A))^T = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 5 & -4 & 1 \end{bmatrix}^T = \begin{bmatrix} 1 & -2 & 5 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix} $$

4. Multiply the adjoint matrix by $\frac{1}{\det(A)}$. Since $\det(A) = 1$, $\frac{1}{\det(A)} = \frac{1}{1} = 1$.

$$ A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = 1 \times \begin{bmatrix} 1 & -2 & 5 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -2 & 5 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix} $$

The inverse of matrix $A$ is $\begin{bmatrix} 1 & -2 & 5 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix}$.

Answer: $A^{-1} = \begin{bmatrix} 1 & -2 & 5 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix}$.

Verification:

We can verify the result by checking if $A A^{-1} = I_3$ and $A^{-1} A = I_3$.

$$ A A^{-1} = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & -2 & 5 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} (1\cdot 1 + 2\cdot 0 + 3\cdot 0) & (1\cdot -2 + 2\cdot 1 + 3\cdot 0) & (1\cdot 5 + 2\cdot -4 + 3\cdot 1) \\ (0\cdot 1 + 1\cdot 0 + 4\cdot 0) & (0\cdot -2 + 1\cdot 1 + 4\cdot 0) & (0\cdot 5 + 1\cdot -4 + 4\cdot 1) \\ (0\cdot 1 + 0\cdot 0 + 1\cdot 0) & (0\cdot -2 + 0\cdot 1 + 1\cdot 0) & (0\cdot 5 + 0\cdot -4 + 1\cdot 1) \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I_3 $$

Calculating $A^{-1} A$ similarly will also yield $I_3$.
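
Such a verification is easy to automate; a minimal NumPy check using the matrices from this example:

```python
import numpy as np

A = np.array([[1, 2, 3], [0, 1, 4], [0, 0, 1]])
A_inv = np.array([[1, -2, 5], [0, 1, -4], [0, 0, 1]])

# Both products should equal the 3x3 identity matrix.
print(np.array_equal(A @ A_inv, np.eye(3, dtype=int)))   # True
print(np.array_equal(A_inv @ A, np.eye(3, dtype=int)))   # True
```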

Method 2: Using Elementary Row Operations (Gauss-Jordan Method)

This method, based on elementary row operations, is a powerful and commonly used technique for finding the inverse of a matrix, especially suitable for implementation in computer algorithms. The idea is to apply a sequence of elementary row operations to the given square matrix $A$ to transform it into the identity matrix $I_n$. The same sequence of elementary row operations is simultaneously applied to the identity matrix $I_n$ of the same order. If $A$ can be reduced to $I_n$, then the matrix that $I_n$ is transformed into is the inverse of $A$, i.e., $A^{-1}$.

Procedure to find the inverse using elementary row operations:

  1. Form the Augmented Matrix: Write down the given square matrix $A$ alongside the identity matrix $I_n$ of the same order, forming an augmented matrix $[A | I_n]$.
  2. Apply Elementary Row Operations: Perform a sequence of elementary row operations on the entire augmented matrix $[A | I_n]$. The goal is to transform the left side (matrix $A$) into the identity matrix $I_n$. The allowed elementary row operations are:
    • Interchanging any two rows ($R_i \leftrightarrow R_j$).
    • Multiplying all elements of a row by a non-zero scalar ($R_i \to k R_i$, where $k \neq 0$).
    • Adding a scalar multiple of one row to another row ($R_i \to R_i + k R_j$, where $k \neq 0$).
    Apply these operations systematically, usually aiming to get 1s on the main diagonal and 0s elsewhere in the left part of the augmented matrix; this systematic reduction of the left side to the identity matrix is known as Gauss-Jordan elimination.
  3. Identify the Inverse: If the left side of the augmented matrix is successfully transformed into the identity matrix $I_n$, the matrix that appears on the right side (which started as $I_n$) is the inverse of $A$, i.e., $A^{-1}$. The final augmented matrix will be in the form $[I_n | A^{-1}]$.
  4. Singular Matrix Case: If, at any stage of applying elementary row operations, you obtain a row consisting entirely of zeros on the left side of the augmented matrix, it means that the matrix $A$ is singular (its determinant is 0), and its inverse does not exist. In this case, the process stops, and we conclude that $A^{-1}$ does not exist.
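
A minimal Python sketch of this procedure follows. It adds partial pivoting (row interchanges to pick the largest available pivot), which the hand computation does not need but which keeps the numerical version robust; the function name `inverse_via_row_ops` is ours.

```python
import numpy as np

def inverse_via_row_ops(A, tol=1e-12):
    """Gauss-Jordan elimination on the augmented matrix [A | I]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])                # step 1: form [A | I]
    for col in range(n):
        # Choose the row with the largest entry in this column as the pivot.
        pivot = max(range(col, n), key=lambda r: abs(aug[r, col]))
        if abs(aug[pivot, col]) < tol:
            raise ValueError("Matrix is singular; inverse does not exist.")
        aug[[col, pivot]] = aug[[pivot, col]]      # R_col <-> R_pivot
        aug[col] /= aug[col, col]                  # R_col -> (1/pivot) R_col
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]   # R_row -> R_row - k R_col
    return aug[:, n:]                              # right half is now A^{-1}

A = [[1.0, 2.0], [2.0, 5.0]]
print(inverse_via_row_ops(A))                      # expected: [[5, -2], [-2, 1]]
```
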

Example of Finding Inverse using Elementary Row Operations

Example 2. Find the inverse of the matrix $A = \begin{bmatrix} 1 & 2 \\ 2 & 5 \end{bmatrix}$ using elementary row operations.

Answer:

The given matrix is $A = \begin{bmatrix} 1 & 2 \\ 2 & 5 \end{bmatrix}$. It is a $2 \times 2$ matrix. We start with the augmented matrix $[A | I_2]$:

$$ [A | I_2] = \begin{bmatrix} 1 & 2 & | & 1 & 0 \\ 2 & 5 & | & 0 & 1 \end{bmatrix} $$

We will apply elementary row operations to transform the left side into $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$.

Step 1: The element in the first row, first column is already 1. This is our first pivot.

Step 2: Make the element below the pivot in the first column zero. We want to make the '2' in the second row, first column zero. Perform the operation $R_2 \to R_2 - 2R_1$ (Subtract 2 times the first row from the second row).

$$ \begin{bmatrix} 1 & 2 & | & 1 & 0 \\ 2 - 2(1) & 5 - 2(2) & | & 0 - 2(1) & 1 - 2(0) \end{bmatrix} $$ $$ = \begin{bmatrix} 1 & 2 & | & 1 & 0 \\ 0 & 5 - 4 & | & -2 & 1 \end{bmatrix} $$ $$ \xrightarrow{R_2 \to R_2 - 2R_1} \begin{bmatrix} 1 & 2 & | & 1 & 0 \\ 0 & 1 & | & -2 & 1 \end{bmatrix} $$

Step 3: Move to the next column and make the diagonal element (pivot) equal to 1. The element in the second row, second column is already 1. This is our second pivot.

Step 4: Make the element above the pivot in the second column zero. We want to make the '2' in the first row, second column zero. Perform the operation $R_1 \to R_1 - 2R_2$ (Subtract 2 times the second row from the first row).

$$ \begin{bmatrix} 1 - 2(0) & 2 - 2(1) & | & 1 - 2(-2) & 0 - 2(1) \\ 0 & 1 & | & -2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & | & 1 + 4 & -2 \\ 0 & 1 & | & -2 & 1 \end{bmatrix} $$ $$ \xrightarrow{R_1 \to R_1 - 2R_2} \begin{bmatrix} 1 & 0 & | & 5 & -2 \\ 0 & 1 & | & -2 & 1 \end{bmatrix} $$

The left side of the augmented matrix is now the identity matrix $I_2$. The matrix on the right side is the inverse of $A$.

$$ A^{-1} = \begin{bmatrix} 5 & -2 \\ -2 & 1 \end{bmatrix} $$

Answer: $A^{-1} = \begin{bmatrix} 5 & -2 \\ -2 & 1 \end{bmatrix}$.

Verification:

We can check this result using the adjoint method for a $2 \times 2$ matrix $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. The inverse is $A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$.

For $A = \begin{bmatrix} 1 & 2 \\ 2 & 5 \end{bmatrix}$, the determinant is $\det(A) = (1 \times 5) - (2 \times 2) = 5 - 4 = 1$.

The adjoint is $\text{adj}(A) = \begin{bmatrix} 5 & -2 \\ -2 & 1 \end{bmatrix}$.

So, $A^{-1} = \frac{1}{1} \begin{bmatrix} 5 & -2 \\ -2 & 1 \end{bmatrix} = \begin{bmatrix} 5 & -2 \\ -2 & 1 \end{bmatrix}$. Both methods give the same correct result.
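
This closed-form $2 \times 2$ inverse is compact enough to sketch directly in code (a minimal illustration; the function name `inverse_2x2` is ours):

```python
import numpy as np

def inverse_2x2(M):
    """Closed-form inverse of [[a, b], [c, d]]: (1/det) * [[d, -b], [-c, a]]."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if np.isclose(det, 0.0):
        raise ValueError("Matrix is singular; inverse does not exist.")
    return np.array([[d, -b], [-c, a]]) / det

print(inverse_2x2([[1.0, 2.0], [2.0, 5.0]]))   # expected: [[5, -2], [-2, 1]]
```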


Properties of the Inverse of a Matrix

For invertible square matrices, the inverse operation follows certain rules. Assuming $A$ and $B$ are invertible matrices of the same order, and $k$ is a non-zero scalar, the following properties hold:

1. Inverse of the Inverse: $(A^{-1})^{-1} = A$

Explanation: Taking the inverse of the inverse of a matrix returns the original matrix. This is analogous to taking the reciprocal of a reciprocal for numbers, e.g., $(1/a)^{-1} = a$.

Derivation: Let $B = A^{-1}$. By definition, $AB = BA = I$. We want to find the inverse of $B$, denoted as $(A^{-1})^{-1}$. By the definition of an inverse, the inverse of $B$ is the matrix $C$ such that $BC = CB = I$. Substituting $B = A^{-1}$, we need $A^{-1} C = C A^{-1} = I$. From the original definition of $A^{-1}$, we know that $A^{-1} A = A A^{-1} = I$. Comparing $A^{-1} C = I$ with $A^{-1} A = I$, and $C A^{-1} = I$ with $A A^{-1} = I$, it is clear that $C=A$ is the matrix that satisfies the condition $BC = CB = I$. Therefore, $(A^{-1})^{-1} = A$.

2. Inverse of a Product (Reversal Law): $(AB)^{-1} = B^{-1} A^{-1}$

Explanation: The inverse of the product of two invertible matrices (of the same order) is the product of their individual inverses, but in the reverse order of the original matrices. This is a very important property and was derived in the previous section on the definition of invertible matrices.

Proof (Recap): We need to show that $(AB)(B^{-1}A^{-1}) = I$ and $(B^{-1}A^{-1})(AB) = I$. Using associativity: $(AB)(B^{-1}A^{-1}) = A(B B^{-1})A^{-1} = A I A^{-1} = A A^{-1} = I$. And $(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1} A)B = B^{-1} I B = B^{-1} B = I$. Since the product in both orders is $I$, $B^{-1}A^{-1}$ is the inverse of $AB$.

3. Inverse of the Transpose: $(A^T)^{-1} = (A^{-1})^T$

Explanation: The inverse of the transpose of a matrix is equal to the transpose of its inverse.

Derivation: We know that $A A^{-1} = I$. Taking the transpose of both sides of this equation and using the reversal law for transpose $(XY)^T = Y^T X^T$, and the property $I^T = I$ (the transpose of the identity matrix is the identity matrix):

$$ (A A^{-1})^T = I^T $$ $$ (A^{-1})^T A^T = I $$

Similarly, taking the transpose of $A^{-1} A = I$ gives $(A^{-1} A)^T = I^T$, so $A^T (A^{-1})^T = I$.

Since $(A^{-1})^T A^T = A^T (A^{-1})^T = I$, by the definition of the inverse, the matrix $(A^{-1})^T$ is the inverse of $A^T$. Thus, $(A^T)^{-1} = (A^{-1})^T$.

4. Inverse of a Scalar Multiple: $(kA)^{-1} = \frac{1}{k} A^{-1}$ (for $k \neq 0$)

Explanation: For any non-zero scalar $k$, the inverse of the matrix $kA$ is found by taking the reciprocal of the scalar $\frac{1}{k}$ and multiplying it by the inverse of the original matrix $A^{-1}$.

Derivation: We want to show that the matrix $\frac{1}{k} A^{-1}$ is the inverse of the matrix $kA$. To do this, we check their product in both orders and see if it equals the identity matrix $I$.

Consider the first product: $(kA) \left(\frac{1}{k} A^{-1}\right)$

$$ (kA) \left(\frac{1}{k} A^{-1}\right) $$

Using the associative property of matrix multiplication and scalar multiplication properties $(pM)(qN) = (pq)(MN)$ where $p,q$ are scalars:

$$ = \left(k \cdot \frac{1}{k}\right) (A A^{-1}) \quad \text{(Using associative property and scalar properties)} $$

Since $k \neq 0$, the product $k \cdot \frac{1}{k} = 1$. From the definition of the inverse, $A A^{-1} = I$. Substitute these:

$$ = (1) I = I \quad \text{(Since } k \cdot \frac{1}{k} = 1 \text{ and } A A^{-1} = I \text{)} $$

The first product is $I$. Now consider the product in the reverse order:

Consider the second product: $\left(\frac{1}{k} A^{-1}\right) (kA)$

$$ \left(\frac{1}{k} A^{-1}\right) (kA) $$

Using the associative property and scalar multiplication properties:

$$ = \left(\frac{1}{k} \cdot k\right) (A^{-1} A) = (1) I = I $$

Since $(kA)(\frac{1}{k} A^{-1}) = (\frac{1}{k} A^{-1})(kA) = I$, by the definition of the inverse, the matrix $\frac{1}{k} A^{-1}$ is the inverse of the matrix $kA$. Thus, $(kA)^{-1} = \frac{1}{k} A^{-1}$.

5. Determinant of the Inverse: $\det(A^{-1}) = \frac{1}{\det(A)}$

Explanation: The determinant of the inverse matrix is equal to the reciprocal of the determinant of the original matrix. This important property was derived in the previous section on properties of determinants.

Proof (Recap): From the definition of the inverse, $A A^{-1} = I$. Taking the determinant of both sides and using the property that the determinant of a product is the product of determinants ($\det(XY) = \det(X)\det(Y)$), and that the determinant of the identity matrix is 1 ($\det(I)=1$):

$$ \det(A A^{-1}) = \det(I) $$ $$ \det(A) \det(A^{-1}) = 1 $$

Since $A$ is invertible, its determinant $\det(A)$ is non-zero. Therefore, we can divide both sides by $\det(A)$ to get:

$$ \det(A^{-1}) = \frac{1}{\det(A)} $$

These properties are essential for understanding the behavior of invertible matrices under various operations and are frequently used in theoretical and applied linear algebra.
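
A quick numerical sanity check of these five properties, sketched with NumPy on an arbitrarily chosen invertible pair of matrices (the specific matrices and scalar are only for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # det = 1, invertible
B = np.array([[1.0, 2.0], [0.0, 1.0]])   # det = 1, invertible
k = 3.0
inv = np.linalg.inv

print(np.allclose(inv(inv(A)), A))                               # (A^-1)^-1 = A
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))                  # (AB)^-1 = B^-1 A^-1
print(np.allclose(inv(A.T), inv(A).T))                           # (A^T)^-1 = (A^-1)^T
print(np.allclose(inv(k * A), (1 / k) * inv(A)))                 # (kA)^-1 = (1/k) A^-1
print(np.isclose(np.linalg.det(inv(A)), 1 / np.linalg.det(A)))   # det(A^-1) = 1/det(A)
```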



Solution of a System of Linear Equations using Matrix Inverse Method

Matrices provide a powerful and systematic framework for representing and solving systems of linear equations. For systems where the number of equations equals the number of variables, and the coefficient matrix is invertible, the inverse matrix method offers a direct approach to finding the unique solution.


Matrix Representation of a System of Linear Equations

Consider a general system of $n$ linear equations in $n$ variables $x_1, x_2, \dots, x_n$:

$$ a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1 $$

$$ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2 $$

$$ \vdots $$

$$ a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n $$

This system can be written concisely in the form of a single matrix equation $AX = B$, where:

$A$ is the coefficient matrix, which is an $n \times n$ matrix containing the coefficients of the variables:

$$ A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{bmatrix}_{n \times n} $$

$X$ is the variable matrix (or column vector), which is an $n \times 1$ matrix containing the variables:

$$ X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}_{n \times 1} $$

$B$ is the constant matrix (or column vector), which is an $n \times 1$ matrix containing the constant terms on the right side of the equations:

$$ B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}_{n \times 1} $$

The matrix equation $AX = B$ represents the entire system of $n$ linear equations in $n$ variables. Performing the matrix multiplication on the left side $A X$ will result in an $n \times 1$ matrix whose elements are the left sides of the system of equations. Equating this to the $n \times 1$ matrix $B$ gives the original system.


Solving the System using Matrix Inverse

The matrix inverse method for solving the system $AX=B$ is applicable when the coefficient matrix $A$ is invertible. Recall that a square matrix $A$ is invertible if and only if its determinant is non-zero ($\det(A) \neq 0$). If $A$ is invertible, its unique inverse $A^{-1}$ exists.

Consider the matrix equation:

$$ AX = B \quad \text{... (v)} $$

Since $A$ is invertible, its inverse $A^{-1}$ exists. We can "isolate" the variable matrix $X$ by multiplying both sides of the equation by $A^{-1}$. Because matrix multiplication is not commutative, both sides must be multiplied on the same side. We choose to pre-multiply (multiply from the left) both sides by $A^{-1}$:

$$ A^{-1}(AX) = A^{-1}B $$

Using the associative property of matrix multiplication, we can regroup the matrices on the left side:

$$ (A^{-1}A)X = A^{-1}B $$

By the definition of the inverse matrix, the product of a matrix and its inverse is the identity matrix: $A^{-1}A = I_n$. Substitute $I_n$ into the equation:

$$ I_n X = A^{-1}B $$

Multiplying any matrix $X$ by the identity matrix $I_n$ of compatible size results in the original matrix $X$ ($I_n X = X$). So,

$$ X = A^{-1}B \quad \text{... (vi)} $$

This equation $X = A^{-1}B$ gives the solution for the variable matrix $X$. Once the inverse $A^{-1}$ is calculated (using the adjoint method or elementary row operations) and then multiplied by the constant matrix $B$, the resulting matrix $X$ will contain the values of the variables $x_1, x_2, \dots, x_n$ that satisfy the system of equations.

Conditions for Solutions:

The existence and uniqueness of the solution to the system $AX=B$ depend on the determinant of the coefficient matrix $A$:

1. If $\det(A) \neq 0$: The matrix $A$ is non-singular (invertible). In this case, the system $AX = B$ has a unique solution given by the formula $X = A^{-1}B$. A system with a unique solution is called a consistent system.

2. If $\det(A) = 0$: The matrix $A$ is singular (non-invertible). The inverse $A^{-1}$ does not exist, so the formula $X=A^{-1}B$ cannot be used. In this case, the system either has no solution (it is inconsistent) or has infinitely many solutions.

When $\det(A)=0$, the matrix inverse method cannot directly provide the solution. Other methods, such as Gaussian elimination (row reduction of the augmented matrix $[A | B]$), are needed to determine whether the system is inconsistent or has infinitely many solutions.
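
In code, once the determinant has been checked, the solution is a single matrix product. A minimal NumPy sketch on an arbitrary illustrative system ($3x + y = 9$, $x + 2y = 8$, not one of the worked examples), mirroring equation (vi):

```python
import numpy as np

# Illustrative system: 3x + y = 9, x + 2y = 8
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
B = np.array([[9.0],
              [8.0]])

if np.isclose(np.linalg.det(A), 0.0):
    print("det(A) = 0: the matrix inverse method is not applicable.")
else:
    X = np.linalg.inv(A) @ B        # X = A^{-1} B, as in equation (vi)
    print(X)                        # expected: [[2.], [3.]]
```

In numerical practice, `np.linalg.solve(A, B)` is usually preferred over forming $A^{-1}$ explicitly, but the version above mirrors the formula derived here.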


Example of Solving a System using Matrix Inverse Method

Example 3. Solve the following system of linear equations using the matrix inverse method:

$2x + 3y = 7$

$x + 2y = 4$

Answer:

First, write the given system of linear equations in the matrix form $AX = B$:

$$ \begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 7 \\ 4 \end{bmatrix} $$

Here, the coefficient matrix is $A = \begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix}$, the variable matrix is $X = \begin{bmatrix} x \\ y \end{bmatrix}$, and the constant matrix is $B = \begin{bmatrix} 7 \\ 4 \end{bmatrix}$.

To use the matrix inverse method, we first check if the coefficient matrix $A$ is invertible by calculating its determinant.

1. Calculate the determinant of $A$:

$$ \det(A) = \begin{vmatrix} 2 & 3 \\ 1 & 2 \end{vmatrix} = (2 \times 2) - (1 \times 3) = 4 - 3 = 1 $$

Since $\det(A) = 1$, which is non-zero, the matrix $A$ is invertible, and the system has a unique solution.

2. Find the inverse of matrix $A$, $A^{-1}$. We can use the adjoint method for a $2 \times 2$ matrix. For a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the adjoint is $\text{adj} = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$.

For $A = \begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix}$, the adjoint is $\text{adj}(A) = \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix}$.

The inverse is given by $A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$. Substitute $\det(A) = 1$ and $\text{adj}(A) = \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix}$:

$$ A^{-1} = \frac{1}{1} \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix} $$

3. Use the formula $X = A^{-1}B$ to find the solution for the variable matrix $X$:

$$ X = \begin{bmatrix} x \\ y \end{bmatrix} = A^{-1}B = \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 7 \\ 4 \end{bmatrix} $$

Perform the matrix multiplication of the $2 \times 2$ matrix $A^{-1}$ and the $2 \times 1$ matrix $B$. The resulting matrix will be of order $2 \times 1$.

$$ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} (2 \times 7) + (-3 \times 4) \\ (-1 \times 7) + (2 \times 4) \end{bmatrix} = \begin{bmatrix} 14 - 12 \\ -7 + 8 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix} $$

Equating the elements of the variable matrix $X$ to the elements of the resulting matrix:

$$ x = 2 $$ $$ y = 1 $$

The solution to the system of equations is $x=2$ and $y=1$.

Answer: The solution to the system is $x = 2, y = 1$.

Verification:

Substitute $x=2$ and $y=1$ back into the original equations:

Equation 1: $2x + 3y = 2(2) + 3(1) = 4 + 3 = 7$. This matches the right side of the first equation.

Equation 2: $x + 2y = 1(2) + 2(1) = 2 + 2 = 4$. This matches the right side of the second equation.

The solution is correct.

The matrix inverse method provides a direct formula for solving systems of linear equations when the coefficient matrix is invertible. It is a powerful theoretical tool and can be efficient for manually solving small systems or conceptually understanding the nature of solutions.



Solving Systems of Linear Equations using Determinants (Cramer's Rule)

In addition to the matrix inverse method, another method for solving systems of linear equations that utilizes determinants is called Cramer's Rule. This rule provides a direct formula for the value of each variable in the system, provided that the determinant of the coefficient matrix is non-zero.


Cramer's Rule Statement

Consider a system of $n$ linear equations in $n$ variables $x_1, x_2, \dots, x_n$, represented in matrix form as $AX = B$. Here, $A$ is the $n \times n$ coefficient matrix, $X$ is the $n \times 1$ column matrix of variables, and $B$ is the $n \times 1$ column matrix of constant terms. For Cramer's rule to be applicable, $A$ must be a square matrix.

If the determinant of the coefficient matrix $A$ is non-zero ($\det(A) \neq 0$), then the system of equations has a unique solution. The value of each variable $x_i$ in this unique solution is given by a ratio of two determinants:

$$ x_i = \frac{\det(A_i)}{\det(A)} \quad \text{... (vii)} $$

where $\det(A)$ is the determinant of the coefficient matrix $A$, and $A_i$ is the matrix obtained from $A$ by replacing its $i$-th column with the constant column matrix $B$.

Cramer's Rule for a System of 2 Linear Equations:

Let's apply Cramer's rule to a system of two linear equations in two variables $x$ and $y$:

$$ a_{11}x + a_{12}y = b_1 $$ $$ a_{21}x + a_{22}y = b_2 $$

In matrix form, this system is $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$.

The coefficient matrix is $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, and the constant matrix is $B = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$.

The determinant of the coefficient matrix is $\det(A) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{21}a_{12}$.

To find $x$, we replace the 1st column of $A$ with $B$ to form $A_1$:

$$ A_1 = \begin{bmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{bmatrix} $$

The determinant of $A_1$ is $\det(A_1) = \begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix} = b_1a_{22} - b_2a_{12}$.

To find $y$, we replace the 2nd column of $A$ with $B$ to form $A_2$:

$$ A_2 = \begin{bmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{bmatrix} $$

The determinant of $A_2$ is $\det(A_2) = \begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix} = a_{11}b_2 - a_{21}b_1$.

Cramer's Rule gives the solution as:

$$ x = \frac{\det(A_1)}{\det(A)} = \frac{\begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}} \quad \text{... (viii)} $$

$$ y = \frac{\det(A_2)}{\det(A)} = \frac{\begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}} \quad \text{... (ix)} $$

These formulas are valid only if $\det(A) \neq 0$.
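
A minimal NumPy sketch of Cramer's rule, written for a general $n \times n$ system so that the $2 \times 2$ formulas above appear as a special case (the function name `cramer_solve` is ours; the sample system $x + y = 3$, $2x - y = 0$ is chosen only for illustration):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b via Cramer's rule: x_i = det(A_i) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).ravel()
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det(A) = 0: Cramer's rule does not apply.")
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = b                  # replace the i-th column of A with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

# Illustrative system: x + y = 3, 2x - y = 0
print(cramer_solve([[1, 1], [2, -1]], [3, 0]))   # expected: [1. 2.]
```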

Conditions for Solutions using Determinants:

For a system of $n$ linear equations in $n$ variables $AX = B$:

1. If $\det(A) \neq 0$: The coefficient matrix $A$ is non-singular. The system is consistent and has a unique solution. This unique solution is given by Cramer's rule: $x_i = \frac{\det(A_i)}{\det(A)}$ for $i=1, 2, \ldots, n$.

2. If $\det(A) = 0$: The coefficient matrix $A$ is singular. In this case, Cramer's rule cannot be directly applied (as it involves division by zero). The system is either inconsistent (no solution) or consistent with infinitely many solutions. To distinguish between these possibilities using determinants and adjoints: if at least one of the determinants $\det(A_i)$ is non-zero (equivalently, if $(\text{adj } A)B$ is not the zero matrix), the system is inconsistent and has no solution; if all the determinants $\det(A_i)$ are zero (equivalently, if $(\text{adj } A)B$ is the zero matrix), the system may be consistent with infinitely many solutions or inconsistent, and the equations themselves must be examined further.

Cramer's Rule is most effectively used when $\det(A) \neq 0$ to find the unique solution directly. For larger systems (more than 3 variables), calculating the many determinants involved can be more computationally intensive than other methods like Gaussian elimination.


Example of Solving a System using Cramer's Rule

Example 4. Solve the following system of linear equations using Cramer's Rule:

$x + 2y = 10$

$3x - y = 2$

Answer:

First, write the given system of linear equations in matrix form $AX = B$:

$$ \begin{bmatrix} 1 & 2 \\ 3 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 10 \\ 2 \end{bmatrix} $$

Here, the coefficient matrix is $A = \begin{bmatrix} 1 & 2 \\ 3 & -1 \end{bmatrix}$, the variable matrix is $X = \begin{bmatrix} x \\ y \end{bmatrix}$, and the constant matrix is $B = \begin{bmatrix} 10 \\ 2 \end{bmatrix}$.

1. Calculate the determinant of the coefficient matrix $A$.

$$ \det(A) = \begin{vmatrix} 1 & 2 \\ 3 & -1 \end{vmatrix} = (1 \times -1) - (3 \times 2) = -1 - 6 = -7 $$

Since $\det(A) = -7$, which is non-zero, the system has a unique solution, and we can apply Cramer's Rule.

2. Create the matrices $A_1$ and $A_2$ by replacing the columns of $A$ with the constant matrix $B$, and calculate their determinants.

To find $x$, replace the 1st column of $A$ with $B$ to form $A_1$:

$$ A_1 = \begin{bmatrix} 10 & 2 \\ 2 & -1 \end{bmatrix} $$

Calculate the determinant of $A_1$:

$$ \det(A_1) = \begin{vmatrix} 10 & 2 \\ 2 & -1 \end{vmatrix} = (10 \times -1) - (2 \times 2) = -10 - 4 = -14 $$

To find $y$, replace the 2nd column of $A$ with $B$ to form $A_2$:

$$ A_2 = \begin{bmatrix} 1 & 10 \\ 3 & 2 \end{bmatrix} $$

Calculate the determinant of $A_2$:

$$ \det(A_2) = \begin{vmatrix} 1 & 10 \\ 3 & 2 \end{vmatrix} = (1 \times 2) - (3 \times 10) = 2 - 30 = -28 $$

3. Use Cramer's Rule formulas $x = \frac{\det(A_1)}{\det(A)}$ and $y = \frac{\det(A_2)}{\det(A)}$ to find the values of $x$ and $y$.

$$ x = \frac{\det(A_1)}{\det(A)} = \frac{-14}{-7} = 2 $$ $$ y = \frac{\det(A_2)}{\det(A)} = \frac{-28}{-7} = 4 $$

The solution to the system is $x=2$ and $y=4$.

Answer: The solution is $x = 2, y = 4$.

Verification:

Substitute $x=2$ and $y=4$ back into the original equations:

Equation 1: $x + 2y = (2) + 2(4) = 2 + 8 = 10$. This matches the right side of the first equation.

Equation 2: $3x - y = 3(2) - (4) = 6 - 4 = 2$. This matches the right side of the second equation.

The solution is correct.


Example of Cases where Cramer's Rule Does Not Provide a Unique Solution

Example 5. Discuss the nature of solutions for the following systems of linear equations using determinants:

System A: $x + y = 5$

$2x + 2y = 10$

System B: $x + y = 5$

$x + y = 6$

Answer:

System A:

The system is $x + y = 5$ and $2x + 2y = 10$. In matrix form:

$$ \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 5 \\ 10 \end{bmatrix} $$

The coefficient matrix is $A = \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix}$, and the constant matrix is $B = \begin{bmatrix} 5 \\ 10 \end{bmatrix}$.

Calculate $\det(A)$:

$$ \det(A) = \begin{vmatrix} 1 & 1 \\ 2 & 2 \end{vmatrix} = (1 \times 2) - (2 \times 1) = 2 - 2 = 0 $$

Since $\det(A) = 0$, Cramer's Rule does not apply to find a unique solution. We need to check $\det(A_1)$ and $\det(A_2)$.

Replace 1st column of $A$ with $B$ to get $A_1$:

$$ A_1 = \begin{bmatrix} 5 & 1 \\ 10 & 2 \end{bmatrix} $$ $$ \det(A_1) = \begin{vmatrix} 5 & 1 \\ 10 & 2 \end{vmatrix} = (5 \times 2) - (10 \times 1) = 10 - 10 = 0 $$

Replace 2nd column of $A$ with $B$ to get $A_2$:

$$ A_2 = \begin{bmatrix} 1 & 5 \\ 2 & 10 \end{bmatrix} $$ $$ \det(A_2) = \begin{vmatrix} 1 & 5 \\ 2 & 10 \end{vmatrix} = (1 \times 10) - (2 \times 5) = 10 - 10 = 0 $$

Since $\det(A) = 0$ and all $\det(A_i) = 0$ ($\det(A_1)=0, \det(A_2)=0$), the system is either consistent with infinitely many solutions or inconsistent. Looking at the original equations, the second equation $2x+2y=10$ can be divided by 2 to get $x+y=5$, which is identical to the first equation. The two equations represent the same line. Thus, there are infinitely many solutions.

Conclusion for System A: $\det(A)=0$, $\det(A_1)=0$, $\det(A_2)=0$. The system is consistent with infinitely many solutions.

System B:

The system is $x + y = 5$ and $x + y = 6$. In matrix form:

$$ \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 5 \\ 6 \end{bmatrix} $$

The coefficient matrix is $A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$, and the constant matrix is $B = \begin{bmatrix} 5 \\ 6 \end{bmatrix}$.

Calculate $\det(A)$:

$$ \det(A) = \begin{vmatrix} 1 & 1 \\ 1 & 1 \end{vmatrix} = (1 \times 1) - (1 \times 1) = 1 - 1 = 0 $$

Since $\det(A) = 0$, Cramer's Rule does not apply for a unique solution. We check $\det(A_1)$ and $\det(A_2)$.

Replace 1st column of $A$ with $B$ to get $A_1$:

$$ A_1 = \begin{bmatrix} 5 & 1 \\ 6 & 1 \end{bmatrix} $$ $$ \det(A_1) = \begin{vmatrix} 5 & 1 \\ 6 & 1 \end{vmatrix} = (5 \times 1) - (6 \times 1) = 5 - 6 = -1 $$

Replace 2nd column of $A$ with $B$ to get $A_2$:

$$ A_2 = \begin{bmatrix} 1 & 5 \\ 1 & 6 \end{bmatrix} $$ $$ \det(A_2) = \begin{vmatrix} 1 & 5 \\ 1 & 6 \end{vmatrix} = (1 \times 6) - (1 \times 5) = 6 - 5 = 1 $$

Since $\det(A) = 0$, but $\det(A_1) \neq 0$ and $\det(A_2) \neq 0$, the system is inconsistent (no solution). Geometrically, the equations represent two parallel lines that are distinct.

Conclusion for System B: $\det(A)=0$, $\det(A_1) \neq 0$, $\det(A_2) \neq 0$. The system is inconsistent (no solution).

Cramer's Rule is a valuable method for finding the unique solution of a system of $n$ equations in $n$ variables when the coefficient matrix is non-singular. When the determinant is zero, it indicates that there is no unique solution, and further analysis is required to determine if there are no solutions or infinitely many solutions.
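
For completeness, the determinant checks used in this example can be sketched in NumPy (the function name `classify_2x2_system` is ours; as discussed above, the case $\det(A)=0$ with all $\det(A_i)=0$ still requires inspecting the equations themselves):

```python
import numpy as np

def classify_2x2_system(A, b):
    """Compute det(A), det(A_1), det(A_2) and report the conclusion they support."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    dets_i = []
    for i in range(2):
        A_i = A.copy()
        A_i[:, i] = b                  # replace the i-th column of A with b
        dets_i.append(np.linalg.det(A_i))
    if not np.isclose(det_A, 0.0):
        return "det(A) != 0: unique solution (Cramer's rule applies)"
    if all(np.isclose(d, 0.0) for d in dets_i):
        return "det(A) = 0 and all det(A_i) = 0: infinitely many solutions or inconsistent; check the equations"
    return "det(A) = 0 but some det(A_i) != 0: inconsistent (no solution)"

print(classify_2x2_system([[1, 1], [2, 2]], [5, 10]))   # System A
print(classify_2x2_system([[1, 1], [1, 1]], [5, 6]))    # System B
```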